NPS-CS-09-002

NAVAL POSTGRADUATE SCHOOL
MONTEREY, CALIFORNIA

Center for Joint Services Electronic Warfare

Technical Report

A COMPARISON OF NONLINEAR FILTERS AND MULTI-SENSOR FUSION FOR TRACKING BOOST-PHASE BALLISTIC MISSILES

by

Kyungsu Kim
Phillip E. Pace
Robert G. Hutchins
James B. Michael

January 2009

Approved for public release; distribution unlimited.

Prepared for: U.S. Missile Defense Agency

NAVAL POSTGRADUATE SCHOOL
Monterey, California 93943-5000

Daniel T. Oliver, President
Leonard A. Ferrari, Executive Vice President and Provost

This report was funded in part by the U.S. Missile Defense Agency and the Agency for Defense Development (ADD), South Korea. Reproduction of all or part of this report is authorized.

REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.

1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: January 2009
3. REPORT TYPE AND DATES COVERED: Technical report
4. TITLE AND SUBTITLE: A COMPARISON OF NONLINEAR FILTERS AND MULTI-SENSOR FUSION FOR TRACKING BOOST-PHASE BALLISTIC MISSILES
6. AUTHOR(S): Kyungsu Kim, Phillip E.
Pace, Robert G. Hutchins, James B. Michael
5. FUNDING NUMBERS: MD7080101P0630
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Center for Joint Services Electronic Warfare, Naval Postgraduate School, Monterey, CA 93943-5000
8. PERFORMING ORGANIZATION REPORT NUMBER: NPS-CS-09-002
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): Missile Defense Agency, Washington, DC
10. SPONSORING/MONITORING AGENCY REPORT NUMBER
11. SUPPLEMENTARY NOTES: The views expressed in this report are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited.
12b. DISTRIBUTION CODE: A
13. ABSTRACT (maximum 200 words): This report studies two aspects of tracking ballistic missiles during boost phase. The first part compares the performance of several nonlinear filtering algorithms in tracking a single target: the extended Kalman filter (EKF), the unscented Kalman filter (UKF), the particle filter (PF), and the particle filter with UKF update (UPF). Measurements are range, azimuth and elevation. In the absence of measurement error, all algorithms work well except for the PF, which does not converge. With measurement noise (standard deviations of 10 meters and 1 degree) the EKF also performs poorly, while the UPF is the top performer (although it is also the most computationally intensive). The second part compares the extended information filter (EIF) with earlier work on track scoring to perform sensor/data fusion in a multi-hypothesis framework. Here we find that the EIF handily outperformed the other fusion algorithms based on track scoring that we tested.
14. SUBJECT TERMS: Boost-phase Missile Defense, Impulse Modeling, IMPULSE, RF sensors, Multiple Hypotheses Tracking, Extended Kalman Filtering, Unscented Kalman Filtering, Particle Filtering, Unscented Particle Filtering, Multi-sensor Fusion, Extended Information Filter
15.
NUMBER OF PAGES: 93
16. PRICE CODE
17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: Unlimited
NSN 7540-01-280-5500 / Standard Form 298 (Rev. 2-89), Prescribed by ANSI Std. 239-18

Approved for public release; distribution is unlimited.

A COMPARISON OF NONLINEAR FILTERS AND MULTI-SENSOR FUSION FOR TRACKING BOOST-PHASE BALLISTIC MISSILES

Kyungsu Kim
Phillip E. Pace
Robert G. Hutchins
James B. Michael

Center for Joint Services Electronic Warfare and the Department of Electrical and Computer Engineering at the
NAVAL POSTGRADUATE SCHOOL

January 2009

ABSTRACT

This report studies two aspects of tracking ballistic missiles during boost phase. The first part compares the performance of several nonlinear filtering algorithms in tracking a single target: the extended Kalman filter (EKF), the unscented Kalman filter (UKF), the particle filter (PF), and the particle filter with UKF update (UPF). Measurements are range, azimuth and elevation. In the absence of measurement error, all algorithms work well except for the PF, which does not converge. With measurement noise (standard deviations of 10 meters and 1 degree) the EKF also performs poorly, while the UPF is the top performer (although it is also the most computationally intensive). The second part compares the extended information filter (EIF) with earlier work on track scoring to perform sensor/data fusion in a multi-hypothesis framework. Here we find that the EIF handily outperformed the other fusion algorithms based on track scoring that we tested.

TABLE OF CONTENTS

I. INTRODUCTION
II. SENSOR MODELS AND MULTIPLE HYPOTHESES TRACKING
   A. GENERATING BALLISTIC MISSILE PROFILES
   B. SENSOR MODELS
   C. MULTIPLE HYPOTHESES TRACKING
      1. Contacts, Targets, Scans and Associations
      2. MHT Implementation
III. NON-LINEAR FILTERS FOR TRACKING
   A.
EXTENDED KALMAN FILTER (EKF)
   B. UNSCENTED KALMAN FILTER (UKF)
   C. PARTICLE FILTER (PF)
   D. UNSCENTED PARTICLE FILTER (UPF)
IV. SENSOR DATA FUSION
   A. INTRODUCTION TO INFORMATION FILTER
   B. LINEAR INFORMATION FILTER
   C. EXTENDED INFORMATION FILTER
   D. DISTRIBUTED AND DECENTRALIZED DATA FUSION SYSTEMS
      1. Data Fusion Architecture
      2. Proposed Fusion Architecture
V. SIMULATION RESULTS
   A. MODELS OF TARGET MOTION AND RADAR MEASUREMENTS
   B. COMPARISON OF PERFORMANCE OF NONLINEAR FILTERS
   C. MULTI-SENSOR FUSION
VI. CONCLUSION
VII. FUTURE WORK

LIST OF FIGURES

Figure 1. Main functional areas implemented in this study, to include the IMPULSE© threat profile, the RF sensor models, and the multiple hypothesis tracking algorithm
Figure 2. Multiple missile attack directed toward the American continent
Figure 3. Radar Sensor 1 SNR versus time while observing BM1
Figure 4. Calculation of the incident angle θ
Figure 5. Calculated incident angle θ
Figure 6. Polar plot of the Ballistic Missile RCS
Figure 7. Linear plot of the Ballistic Missile RCS
Figure 8. Single-target tracking (in 2-D) illustrating measurement-to-target state prediction pairing, and next-state prediction correction
Figure 9. Multiple-target tracking (2-D) scenario
Figure 10. Schematic diagram of the Unscented Transformation
Figure 11. Single Level Hierarchical Sensor Tracking System
Figure 12. Multiple Level Hierarchical Multiple Sensor Tracking System
Figure 13. Blackboard Architecture in Data Fusion
Figure 14. Decentralized data fusion system implemented with a point-to-point communication architecture
Figure 15. A decentralized data fusion system implemented with a broadcast, fully connected, communication architecture
Figure 16. A decentralized data fusion system implemented with a hybrid, broadcast and
point-to-point, communication architecture
Figure 17. A proposed simple hierarchical fusion architecture
Figure 18. Six near-simultaneous launches of ballistic missiles (viewed looking north)
Figure 19. Same profile as in Figure 18: six, near-simultaneous, ballistic missile launches (looking East-to-West) and 7 RF sensors
Figure 20. Average distance error of seven sensors for 1st upper stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)
Figure 21. Average distance error of seven sensors for 1st lower stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)
Figure 22. Average distance error of seven sensors for 2nd upper stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)
Figure 23. Average distance error of seven sensors for 2nd lower stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)
Figure 24. Average distance error of seven sensors for 1st lower stage trajectory (σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)
Figure 25. Average distance error of seven sensors for 1st lower stage trajectory (UKF vs. UPF, σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)
Figure 26. Average distance error of seven sensors for 1st upper stage trajectory (σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)
Figure 27. Average distance error of seven sensors for 1st upper stage trajectory (UKF vs. UPF, σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)
Figure 28. Distance error for the 1st upper stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 29. Distance error for the 1st upper stage trajectory (All sensors vs. Four sensors)
Figure 30. Distance error for the 1st lower stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 31. Distance error for the 1st lower stage trajectory (All sensors vs. Four sensors)
Figure 32. Distance error for the 2nd upper stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 33. Distance error for the 2nd upper stage trajectory (All sensors vs. Four sensors)
Figure 34.
Distance error for the 2nd lower stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 35. Distance error for the 2nd lower stage trajectory (All sensors vs. Four sensors)
Figure 36. Distance error for the 3rd upper stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 37. Distance error for the 3rd upper stage trajectory (All sensors vs. Four sensors)
Figure 38. Distance error for the 3rd lower stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 39. Distance error for the 3rd lower stage trajectory (All sensors vs. Four sensors)
Figure 40. Distance error for the 4th upper stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 41. Distance error for the 4th upper stage trajectory (All sensors vs. Four sensors)
Figure 42. Distance error for the 4th lower stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 43. Distance error for the 4th lower stage trajectory (All sensors vs. Four sensors)
Figure 44. Distance error for the 5th upper stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 45. Distance error for the 5th upper stage trajectory (All sensors vs. Four sensors)
Figure 46. Distance error for the 5th lower stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 47. Distance error for the 5th lower stage trajectory (All sensors vs. Four sensors)
Figure 48. Distance error for the 6th upper stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 49. Distance error for the 6th upper stage trajectory (All sensors vs. Four sensors)
Figure 50. Distance error for the 6th lower stage trajectory (Three fusion methods and Sensor with smallest error)
Figure 51. Distance error for the 6th lower stage trajectory (All sensors vs. Four sensors)

LIST OF TABLES

Table 1.
Output files from the IMPULSE General Write Utility
Table 2. Column format for each ballistic missile file listed in Table 1 as the output from the General Write Utility GUI
Table 3. RF sensor parameters
Table 4. Sensor positions
Table 5. Average distance error and distance error at last time of three nonlinear filters for 12 trajectories (σ_range = σ_azimuth = σ_elevation = 0)
Table 6. Average distance error during the total time interval and average distance error at last time of three nonlinear filters for 12 trajectories
Table 7. Average distance error of Monte-Carlo runs for 12 trajectories (km)
Table 8. Average distance error at the last time of Monte-Carlo runs for 12 trajectories (km)

I. INTRODUCTION

The objective of this study is to investigate various algorithms for tracking multiple ballistic missiles within the boost phase. The ballistic missile behavior and flight profile are simulated using a proprietary add-on MATLAB module, specifically, IMPULSE©. Sensor modeling has been extensively discussed in previous work [16][20], and these same models are used here.

In the first part, the estimation performance of the following nonlinear filters is compared: the extended Kalman filter (EKF), the unscented Kalman filter (UKF), the particle filter (PF), and the particle filter with UKF proposal (UPF). Tracking ballistic missiles using infrared (bearing only) and radar (range and bearing) sensors requires a nonlinear filtering approach. The most significant nonlinearity results from the conversion of sensor measurements of range, azimuth and elevation into Cartesian coordinate estimates of the target state. This study compares the relative performances of various nonlinear filtering methods applied to tracking multiple ballistic missile targets using radar measurements. The purpose of including the EKF in this study is due to the wide acceptance and frequent application of the EKF.
The reason for including the UKF in this study is the rapidly growing support for the UKF over the EKF in many applications of nonlinear state estimation. The motivation behind including the PF is its application to many nonlinear estimation problems in recent research studies.

Many recent papers have studied the ballistic missile tracking problem. Farina et al. studied the problem of tracking a ballistic object in the reentry phase using the EKF, UKF and PF in [1]. They found that the results favored the EKF. Bruno and Pavlov also applied a variation of the PF to the case of a reentering ballistic object in [2]. Both of these studies used only a second-order coordinate system involving position and velocity, and did not consider acceleration. Saulson and Chang used the UKF, EKF and Central Difference Filter (CDF) to track a ballistic missile in flight from a variety of platforms in [3]. Although they looked at the boost phase, their treatment considered a missile with constant acceleration. Siouris et al. [4] also performed an analysis of the incoming ballistic object using an extended interval Kalman filter. The American Association of Scientists looked at the airborne laser problem in the boost phase but only used the EKF for tracking [5]. In addition, they did not consider tracking from the aircraft and the required sensor accuracy. Clemens and Chang examined the use of various nonlinear estimation filters (EKF, UKF, PF, and the Gaussian Sum Particle Filter, GSPF) for tracking a ballistic missile during boost phase from a moving airborne platform [10].

In the second part, we propose a hierarchical data fusion architecture and method for the case when a number of different types of sensors are deployed in the vicinity of a ballistic missile launch. Sensor data fusion is the process of combining outputs from several sensors with information from other sources, such as information processing blocks, databases or knowledge bases, into one representational form.
It is expected to achieve improved accuracy and more specific inferences than could be achieved by the use of a single sensor alone. In principle, fusion of data from several sensors provides significant advantages over single-source data. In addition to the statistical advantage gained by combining same-source data, the use of multiple types of sensors may increase the accuracy with which a quantity can be observed and characterized. In this study, seven active ground-based radar sensors are used to track the ballistic missile in the boost phase. We consider the information filter, which is an efficient form for fusing the information from multiple sensors without information loss. To evaluate the performance of the Extended Information Filter (EIF) [12], we compare it with the track score function proposed in Patsikas's previous work [20], in which the calculation of a track scoring function is used to identify the sensor with the best track file. All simulations have been implemented using MATLAB.

II. SENSOR MODELS AND MULTIPLE HYPOTHESES TRACKING

In order to detect the threat of a ballistic missile attack, there are a multitude of sensors available to provide early warning information if such an event were to occur. Two particular types of sensors are appropriate for the boost-phase missile tracking problem: passive sensors such as IR, and active sensors such as radar. In this chapter, we investigate surface-based sensors. Furthermore, the mathematical relationships which determine the performance of each sensor will be examined. The results will enable us to take the IMPULSE© missile flight information and introduce appropriate error as a means to model the sensed position information as reported by each sensor. In this section, the radar model and multiple hypotheses tracking (MHT) algorithm, originally developed in [16], are reviewed.

A.
GENERATING BALLISTIC MISSILE PROFILES

This research makes use of the IMPULSE© simulation tool for generating ballistic missile flight profiles, which will later serve as the test data for the tracking algorithm. This software is a product of the National Air & Space Intelligence Center (NASIC), an organization recognized as the cognizant analysts' representative for threat-missile platforms. IMPULSE© is used by engineers and researchers to conduct missile guidance testing and targeting studies, and further enables users to study classified ballistic missile performance. In our study, this software enables us to include flight profiles more sophisticated than simple, constant-rate, parabolic motion models. The missiles in the simulation display flight performance that one would expect a real-world counterpart to exhibit. IMPULSE© generates missile flight performance reflecting realistic missile physical dynamics, e.g., nonlinear velocity and acceleration profiles due to changing mass, staging events, and interaction with the Earth's atmosphere and gravitational field. Figure 2 depicts six missiles, generated with the IMPULSE© software, which have been fired from suspected North Korean missile facilities.

Figure 1. Main functional areas implemented in this study, to include the IMPULSE© threat profile, the RF sensor models, and the multiple hypothesis tracking algorithm.

Figure 2. Multiple missile attack directed toward the American continent.

The files listed in Table 1 contain the IMPULSE© output of each missile's flight as exported by the General Write Utility (GWU). Information regarding the use of this tool can be found in Appendix A.
These files will serve as the input data to the sensor models and tracking algorithm.

Missile Profile                          File
Ballistic Missile 1 Upper Stage data     Flt1UStage.txt
Ballistic Missile 1 Lower Stage data     Flt1LStage.txt
Ballistic Missile 2 Upper Stage data     Flt2UStage.txt
Ballistic Missile 2 Lower Stage data     Flt2LStage.txt
Ballistic Missile 3 Upper Stage data     Flt3UStage.txt
Ballistic Missile 3 Lower Stage data     Flt3LStage.txt
Ballistic Missile 4 Upper Stage data     Flt4UStage.txt
Ballistic Missile 4 Lower Stage data     Flt4LStage.txt
Ballistic Missile 5 Upper Stage data     Flt5UStage.txt
Ballistic Missile 5 Lower Stage data     Flt5LStage.txt
Ballistic Missile 6 Upper Stage data     Flt6UStage.txt
Ballistic Missile 6 Lower Stage data     Flt6LStage.txt

Table 1. Output files from the IMPULSE General Write Utility.

Table 2 details the file format for each text file listed in Table 1. The simulation time is given in the first column. The latitude and longitude are represented in the second and third columns. The fourth and fifth columns give the altitude in kilometers and the thrust magnitude in Newtons, respectively.

Time, s   Geodetic Latitude, rad   Geodetic Longitude, rad   Altitude, km   Thrust Magnitude, N
0         0.70969                  2.2372                    0.02           2.03E+05
1         0.70969                  2.2372                    0.027182       2.03E+05
2         0.70969                  2.2372                    0.048894       2.03E+05
3         0.70969                  2.2372                    0.085382       2.03E+05
4         0.70970                  2.2372                    0.13561        2.03E+05
5         0.70970                  2.2372                    0.20043        2.04E+05
6         0.70970                  2.2372                    0.27988        2.04E+05
7         0.70971                  2.2372                    0.37395        2.04E+05
8         0.70971                  2.2372                    0.48265        2.04E+05
9         0.70972                  2.2372                    0.60599        2.04E+05
10        0.70973                  2.2372                    0.74394        2.05E+05

Table 2. Column format for each ballistic missile file listed in Table 1 as the output from the General Write Utility GUI.

The ballistic missile positions in these files are given in the WGS84 geodetic coordinate system. Furthermore, the latitude and longitude values in these files are in units of radians with respect to the center of the Earth.
The data can be easily converted to North/South, Degrees/Minutes/Seconds (N/S DD/MM/SS) and East/West, Degrees/Minutes/Seconds (E/W DD/MM/SS) format by using the rad2deg.m function in the MATLAB mapping toolbox.

B. SENSOR MODELS

In this study, seven radars, which are placed at different locations relative to the launch position, use the same configuration in order to make the analysis easier to verify. The single-pulse signal-to-noise ratio (SNR) for each radar system can be expressed as:

S/N = (P_t G_t^2 τ λ^2 σ) / ((4π)^3 k T_s B R^4)    (0.1)

where T_s = T_a + T_e; T_s is the system temperature, T_e is the receiver noise temperature and T_a is the antenna noise temperature, which in the work reported here is assumed to be 290 K, as was the assumption in [16]. P_t denotes the peak transmitted power, G_t is the antenna gain, σ is the radar cross section of the object, τ is the compressed pulse width, λ is the wavelength, B is the bandwidth of the receiver, k is Boltzmann's constant and R is the range to the target. Generally, the radar losses are also considered, but they are assumed to be zero for this study. The sensor-to-target slant range, R, is obtained in the simulation by using the MATLAB elevation function. This returns the actual target-to-sensor range. Due to the large unambiguous range required, the pulse repetition frequency (PRF) is modeled as PRF = 150 Hz.

Figure 3 shows the SNR computed at radar sensor 1 (RF1) as it observes the upper and lower stages of the ballistic missile in our simulation. An increasing SNR is observed as the missile passes near and relatively over the sensor during the boost phase of flight. This is observed in the first 100 seconds of the missile's flight. Staging occurs at 65 seconds, where a second SNR value, the green line, is initiated.

Figure 3. Radar Sensor 1 SNR versus time while observing BM1.
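Equation (0.1) can be evaluated directly. The sketch below is in Python rather than the report's MATLAB; the function name is illustrative, and the sample values used in testing it are not taken from the report:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K
C_LIGHT = 299792458.0       # m/s

def snr_single_pulse_db(pt_w, gain_db, freq_hz, tau_s, rcs_m2,
                        bandwidth_hz, t_sys_k, range_m):
    """Single-pulse SNR in dB from Eq. (0.1):
    S/N = Pt * Gt^2 * tau * lambda^2 * sigma / ((4*pi)^3 * k * Ts * B * R^4)."""
    lam = C_LIGHT / freq_hz          # wavelength
    g = 10.0 ** (gain_db / 10.0)     # antenna gain, dB -> linear
    snr = (pt_w * g**2 * tau_s * lam**2 * rcs_m2) / (
        (4.0 * math.pi) ** 3 * K_BOLTZMANN * t_sys_k
        * bandwidth_hz * range_m**4)
    return 10.0 * math.log10(snr)
```

Because only R^4 appears in the denominator, doubling the range lowers the SNR by 40·log10(2), about 12 dB, which is the rapid decay discussed above.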
The SNR value observed on the booster, the red plot, remains at a nominal value of 8 dB as it never leaves the sensor's immediate field of view. Conversely, the upper stage of the missile continues its flight away from the sensor. The SNR is inversely proportional to the fourth power of the range; as a result, the SNR decays rather rapidly beyond 130 seconds of the simulation. As the SNR changes throughout the course of the missile trajectory, the values will be used in the computation of the error in the reported angular and range measurements. Table 3 lists the parameters of the radar considered in the simulation.

Parameter                           Value
Carrier Frequency, f                10 GHz
Peak Power, P_t                     1.0 MW
Antenna Gain, G                     42 dB
Radar Operating Bandwidth, B        15 MHz
Receiver Noise Temperature, T_e     20 K

Table 3. RF sensor parameters.

The radar cross section (RCS) of the missile during the flight is calculated by the program POFacets [17]. The missile used in the simulation is a sample unclassified model. To determine the RCS of the missile, it is assumed that the missile is a circular cylinder with a radius of 0.60 m and a length of 20 m.

The angle between the missile's trajectory vector and the distance vector from the radar to the missile, which is required in order to define the RCS function, is determined by the MATLAB function RFobservation using the geometry of Figure 4. From Figure 4, it is inferred that the angle of incidence is computed from the inner product between the position vector from the sensor to the missile and the tangent vector to the missile trajectory curve. The tangent vector of the curve is defined from the difference of the successive points of the trajectory. The RCS determination is a 3D problem. In this simulation, it is assumed that the change of the angle takes place only on the x-y axes, without taking the third dimension into account.
..•" n.r\e of rne trajecfoiy ,*•' incident angle LOS R distarce between target «rd radar « Radar Figure 4. Calculation of the incident angle 9 . Using the above geometry, the MATLAB function RFobservation calculates the incident angle #for the first radar sensor. A plot of this angle data is shown in Figure 5. 90.04 90.035 90.005 150 time (sec) Figure 5. Calculated incident angle 6 . Having determined the incident angle and using the POFacets code, the RCS of the ballistic missile is obtained as shown in Figure 6 and 7. Figure 6 is a polar plot of the RCS as a function of the angles of incidence. In the vicinity of angle 6 = 90° , which has been calculated from the previous analysis, the val- ues of the RCS are around 36 dBsm. ITOiffire a \V\EK TSa N v \ v N x 270 Figure 6. Polar plot of the Ballistic Missile RCS. 10 m w ;m w Moioclalic Vijb :ho:o ido^ GO ™ Figure 7. Linear plot of the Ballistic Missile RCS Figure 7 is a plot of the ballistic missile RCS with respect to the angle of inci- dence. From the above two plots, it is inferred that the RCS for an angle of incidence in the vicinity of 90° is equal to 36 dBsm. C. MULTIPLE HYPOTHESES TRACKING In this section, we briefly describe the MHT (Multiple Hypotheses Tracking) al- gorithm developed in [16]. 1. Contacts, Targets, Scans and Associations To explain multiple target tracking, several terms must be established to provide clarity. First, the definition of a contact is introduced. This is an observation that con- sists of a detection — by a sensor — and its corresponding measurement. Commonly, a de- tection occurs when the signal-to-noise ratio of a sensor meets or exceeds a predefined threshold. The RF signal return from the missile was such that it enabled the sensor to detect the object. Furthermore, the measurement corresponding to this detection consists of the position of the object. 
In limiting the sensor responses to contacts, a restriction is imposed such that a signal return is of interest only when the object meets a predetermined criterion. This allows the contact to be "flagged" for further examination. For the remainder of this study, the term measurement will also be used interchangeably to mean contact. These terms, as well as the next several terms given, are illustrated in Figure 8 for a single target tracking problem.

Figure 8. Single-target tracking (in 2-D) illustrating measurement-to-target state prediction pairing, and next-state prediction correction.

A target is a measurement from a previous sample time which has been flagged and declared an object of interest; hence, the term a priori target is used. This object's information, or track file, may consist of position, velocity and acceleration parameters. Certain characteristics of a measurement are "screened" and, if they fall within predetermined values, the measurement is declared a target and stored in a database to await further information update. A contact's velocity information, for instance, may be used to mark the measurement for further observation.

The term scan will be used to refer to successive time periods. Allowable measurements and targets will now be further restricted to specific time scans. Finally, the term association is used to describe the pairing of a measurement, in a present scan, with that of a target in a previous time scan. It is important to note that there must be no ambiguity in a target-to-measurement pairing from one time period to the next time period.

There are two essential assumptions made in a single target tracking problem. First, there is one and only one target present in the observation region. Secondly, all sensor observations in successive scans are generated by the same target. Given the a priori target, j, as in Figure 8, the problem is approached as follows: based on prior information about the target, a next-state prediction is generated. This is given by

x_{k+1} = F_k x_k + ω_k

where x_{k+1} is the target next-state vector (prediction), F_k is the known state transition matrix, ω_k is the plant noise associated with the target, and k is the time index. Next, a measurement is taken in the ensuing scan. Since the measurement is likely to originate from the target that caused the contact in the previous scan, a direct comparison may be made between the new observed contact and the expected next-state prediction. The difference between the observed measurement and the expected prediction is referred to as the innovation. The algorithm which computes the next-state prediction is updated with the new observed information. Adjustments are made to improve follow-on estimations. The algorithm eventually stabilizes such that each next-state prediction approaches the actual measurement. Obviously, tracking is straightforward when there is no uncertainty about the origin of successive measurements and when targets do not accelerate in unpredictable ways.

On the other hand, in a multiple-target environment, for any given scan, the sensor responses (measurements) may be due to any target. Thus, each possible movement of established targets must be hypothesized in the observation space. The primary difficulty in multiple target tracking is correctly assigning successive contacts to a corresponding established target's next-state prediction, as shown in Figure 9. This process of making possible assignments is referred to as measurement-to-target association.
Figure 9. Multiple-target tracking (2-D) scenario. An illustration of the difficulty in correctly pairing new measurements with their state predictions so subsequent predictions converge on the true track.

This is an ideal place to introduce the term association hypothesis, in which each hypothesis attempts to provide a likely explanation as to the source of each new measurement. Hence, we have the algorithm's namesake: MHT. An association hypothesis maps each measurement to a possible target's next-state prediction. Furthermore, if the association of contacts were always obvious, the problem would simplify to independent single target problems: the recursive process of measurement, state prediction, observation and, finally, update of each target. However, due to the unpredictable nature of each target in a multiple-target space, the associations are ambiguous.

2. MHT Implementation

The following paragraphs cover the implementation of the MHT using the terms and concepts that have been established in the previous section. Furthermore, the strategy for solving the association problem is presented in detail.

- Generation of Association Hypotheses
Let Z_k = {z_{k,m}, m = 1, 2, ..., m_k} denote the set of measurements m in scan time k. In the simulation, a single measurement z_{k,m} can be further decomposed as z_{k,m} = (xi_{k,m}, eta_{k,m}, zeta_{k,m}), where xi, eta and zeta denote the position of the measurement of interest in a predefined coordinate system. In the study of ballistic missiles, a local vertical coordinate system (north, east and up) centered about the sensor's location will be used. Let Z^k = {Z_1, Z_2, Z_3, ..., Z_k} represent the cumulative set of measurements up to scan time k. Let Omega^k = {Omega_g^k, g = 1, 2, ..., G} represent the set of all measurement-to-target hypotheses at scan k, where the index g denotes the hypothesis number. A hypothesis is the association of measurements in a current scan with a priori targets established in preceding scans. Furthermore, the hypothesis Omega_g^k of the kth scan is taken as the joint hypothesis formed from a prior hypothesis Omega_g^{k-1} and the association hypothesis for the current data scan. Using Figure 9 as an illustration, each hypothesis comprises two possible association pairings.

The probability of each feasible hypothesis may now be derived. Let P_g^k be the probability of hypothesis Omega_g^k given measurements up through time k. To further clarify, allow Omega_1^k to represent hypothesis-1, in which measurement-1 is associated with target-1 (implied by pairing with state estimate-1) and, within the same hypothesis, measurement-2 is associated with target-2 (see Figure 9). The probability of this hypothesis being the correct pairing is represented by P_1^k. Alternatively, hypothesis-2, Omega_2^k, with probability P_2^k, corresponds to the alternate hypothesis in which measurement-1 is associated with target-2 while measurement-2 originates from target-1. Also, let P_g^{k-1} represent the probability of Omega_g^{k-1}, the past relationships.
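The local vertical (north, east, up) frame mentioned above can be illustrated with a small conversion sketch for a sensor reporting range, azimuth and elevation. The angle conventions below (azimuth measured from north, clockwise; elevation above the horizontal) are assumptions for the example, not a statement of the report's sensor definition.

```python
import math

def rae_to_neu(r, az, el):
    """Convert slant range r (m), azimuth az and elevation el (rad)
    into the local north-east-up frame centered at the sensor."""
    north = r * math.cos(el) * math.cos(az)
    east = r * math.cos(el) * math.sin(az)
    up = r * math.sin(el)
    return north, east, up

# A contact due east of the sensor at 45 deg elevation, 1000 m slant range:
n, e, u = rae_to_neu(1000.0, math.pi / 2, math.pi / 4)
```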
To perform multiple hypothesis tracking while keeping computational time minimal, a recursive relationship must be established between P_h^k and P_g^{k-1} to avoid re-evaluating all past data whenever a present scan set, Z_k, is received. The association probability P_h^k is the conditional probability of the joint hypothesis given the set of measurements Z^k. In [22], Nagarajan, Chidambara, and Sharma (NCS) give the relationship

P_h^k = P(Omega_g^k | Z^k) = P({Omega_g^{k-1}, psi_h} | Z^k)

where psi_h = {psi_{m,j}, m = 1, 2, 3, ..., m_k} and psi_{m,j} represents the event that the mth measurement corresponds to (originated from) the jth target as per association hypothesis psi_h. Here g denotes the global index and h denotes the hypothesis index. By using Bayes' rule, and setting P(Z_k) = C, a constant, the expression becomes

P_h^k = (1/C) P(Omega_g^{k-1}, psi_h, Z_k)

which can be further written, applying Bayes' rule again, as

P_h^k = (1/C) P(Omega_g^{k-1}) P(psi_h | Omega_g^{k-1}, Z_k).

Since the events psi_{m,j} that comprise psi_h are independent, we have

P_h^k = (1/C) P(Omega_g^{k-1}) prod_m P(psi_{m,j} | Omega_g^{k-1}, Z_k).

The right-most term follows from observing that the current scan association is affected only by Z_k, all past data having been included through the term Omega_g^{k-1}. From [22], by defining beta(m, j_h, Omega_g^{k-1}) = P(psi_{m,j_h} | Omega_g^{k-1}, Z_k), we have

P_h^k = [ P_g^{k-1} prod_{m=1}^{m_k} beta(m, j_h, Omega_g^{k-1}) ] / C    (4.1)

The term beta(m, j_h, Omega_g^{k-1}) represents the probability that measurement m corresponds to target j_h as per the present scan hypothesis psi_h and the past scan hypothesis Omega_g^{k-1}. To solve for this probability, let J be the number of known a priori targets, and let targets j_1, j_2, ..., j_J have predicted next-state measurement vectors z_hat_{k,1}, z_hat_{k,2}, ..., z_hat_{k,J} and corresponding innovation covariance matrices Sigma_{k,1}, Sigma_{k,2}, ..., Sigma_{k,J}, respectively, as per Omega_g^{k-1} retained from the previous scan k-1.
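The recursion of Eq. (4.1) can be sketched numerically for the two-target, two-measurement scenario of Figure 9: each child hypothesis scores its parent probability times the product of its association likelihoods, and C normalizes the scores. The beta matrix below (rows are measurements, columns are targets) is an illustrative assumption.

```python
import numpy as np
from itertools import permutations

beta = np.array([[0.8, 0.1],          # assumed association likelihoods beta(m, j)
                 [0.2, 0.7]])
P_prev = 1.0                          # probability of the (single) parent hypothesis

# Enumerate the one-to-one measurement-to-target assignments (the psi_h):
hypotheses = []
for assign in permutations(range(2)):
    score = P_prev * np.prod([beta[m, j] for m, j in enumerate(assign)])
    hypotheses.append((assign, score))

C = sum(s for _, s in hypotheses)     # normalizing constant in Eq. (4.1)
probs = {a: s / C for a, s in hypotheses}
```

Here (0, 1) is the hypothesis pairing measurement-1 with target-1 and measurement-2 with target-2; it dominates because its beta terms are large.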
When the new measurement z_{k,m} corresponds to a confirmed target j whose existence is implied by the prior hypothesis Omega_g^{k-1}, then beta(m, j_h, Omega_g^{k-1}) is characterized by the probability density function [25]

beta(m, j, Omega_g^{k-1}) = (2 pi)^{-n/2} |Sigma_{k,m,j}|^{-1/2} exp( -(1/2) z_tilde_{k,m,j}^T Sigma_{k,m,j}^{-1} z_tilde_{k,m,j} )    (4.2)

where z_tilde_{k,m,j} is the measurement-to-target innovation, |Sigma_{k,m,j}| is the determinant of the innovation covariance, and n denotes the degrees of freedom of the measurement (in this study, n = 3, as the sensor's measurements consist of range, azimuth and elevation). Furthermore, the innovation is given by [23][24]

z_tilde_{k,m,j} = z_{k,m} - H_j x_hat_{j,k|k-1}    (4.3)

and represents the difference between the observed measurement, z_{k,m}, and the predicted (expected) measurement H_j x_hat_{j,k|k-1}. The observation matrix H_j is used to select the elements of x_hat_{j,k|k-1} for comparison. The innovation (or residual) covariance is given by

Sigma_{k,m,j} = H_j P_{j,k|k-1} H_j^T + R_k.

The quantities z_tilde_{k,m,j} and Sigma_{k,m,j} above are obtained from the output of the extended Kalman filter (EKF), as outlined in Appendix B, with the update estimate covariance P_{j,k|k-1}. The noise covariance R_k and the observation matrix H_k are further explained in Equations B.2 and B.3 of the same appendix. The appendix also provides a comprehensive review of the EKF equations and the motion model used to predict the next-state position x_hat_j of the target. More details are in [16].

In summary, this chapter described the parameters of the RF sensor at sea level used in the simulation and the multiple hypothesis tracking algorithm based on previous work [16][20].

III. NON-LINEAR FILTERS FOR TRACKING

In this section we describe four recursive estimators (filters) to be used for tracking a ballistic object: the EKF, the UKF, the PF, and the PF with a UKF proposal (UPF). A description of each of the filters used is taken in turn in the following sections.

A.
EXTENDED KALMAN FILTER (EKF)

The EKF linearizes nonlinear dynamic models using partial derivatives evaluated at the expected state vector, in order to perform Kalman filtering with a Taylor-series approximation of the time and/or measurement updates. As a result of this functional linearization, the EKF provides the optimal linear, or linear minimum mean square error (LMMSE), solution. It always approximates the probability density as Gaussian and is therefore not well suited to non-Gaussian distributions such as bimodal or heavily skewed cases. To linearize the nonlinear observation equations, the EKF uses partial derivatives of the measurement function h with respect to the state x. The derivative is evaluated at each time step, k, resulting in the Jacobian matrix, designated H(k). Once the partial derivative is found, standard Kalman filtering procedures are used to determine the predicted state and its covariance. More details are in [16].

B. UNSCENTED KALMAN FILTER (UKF)

The unscented Kalman filter is a recursive MMSE estimator that addresses some of the approximation issues of the EKF. Because the EKF uses only the first-order terms of the Taylor-series expansion of the nonlinear functions, it often introduces large errors in the estimated statistics of the posterior distributions of the states. This is especially evident when the models are highly nonlinear and the Taylor-series truncation error becomes significant. Unlike the EKF, the UKF does not approximate the nonlinear process and observation models; it uses the true nonlinear models and instead approximates the distribution of the state random variable. In the UKF the state distribution is still represented by a Gaussian random variable (GRV), but it is specified using a minimal set of deterministically chosen sample points.
These sample points completely capture the true mean and covariance of the GRV and, when propagated through the true nonlinear system, capture the posterior mean and covariance accurately to the 2nd order for any nonlinearity, with errors introduced only in the 3rd and higher orders [8].

The UKF uses the unscented transformation (the "scaled unscented transformation" as implemented in this study) to generate deterministic sigma points that approximate a random variable's probability distribution, given a true (or assumed) nonlinear function and the prior mean and covariance. The sigma points are chosen based on the prior mean and the respective columns of the matrix square root of the prior covariance. The Cholesky factorization used to solve for the upper triangular matrix square root of the expected state covariance requires that the covariance matrix be positive definite. Square-root implementations of the UKF perform more efficiently and without the positive-definite requirement. The UKF is similar to particle filters in that it generates points about the mean estimate, but it requires less computational cost because the sigma points are sampled deterministically, as opposed to the particle filter's random "particles." Unlike particle filters, the UKF assumes a Gaussian distribution, but it is able to approximate heavy-tailed distributions better than the EKF. For the second-order unscented filter, the estimates of the mean and covariance are accurate to the 2nd order in general and to the 3rd order for Gaussian priors. The second-order unscented filter generally requires at least 2n + 1 sigma points, while the fourth-order unscented filter requires at least 2n^2 + 1 sigma points and is able to estimate the mean to the 4th order in general. A reduced sigma-point implementation requires only that n + 1 sigma points be used [9].
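The unscented transform described above can be sketched as follows. This is the simplified, unscaled variant with a single spread parameter kappa (the report uses the scaled form with alpha and beta); the demonstration function and numbers are illustrative assumptions. For a linear map the transform is exact, which makes it easy to check.

```python
import numpy as np

def unscented_transform(x_mean, P, f, kappa=0.0):
    """Propagate mean/covariance of x through nonlinear f via 2L+1 sigma points."""
    L = x_mean.size
    S = np.linalg.cholesky((L + kappa) * P)   # matrix square root; P must be positive definite
    sigma = [x_mean] + [x_mean + S[:, i] for i in range(L)] \
                     + [x_mean - S[:, i] for i in range(L)]
    W = np.full(2 * L + 1, 1.0 / (2 * (L + kappa)))
    W[0] = kappa / (L + kappa)
    y = np.array([f(s) for s in sigma])       # push each sigma point through f
    y_mean = W @ y                            # recombine: posterior mean
    Py = sum(w * np.outer(yi - y_mean, yi - y_mean) for w, yi in zip(W, y))
    return y_mean, Py

# Check against a linear map, where the transform recovers A x and A P A^T exactly:
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x0 = np.array([1.0, 2.0])
P0 = np.array([[2.0, 0.5],
               [0.5, 1.0]])
y_mean, Py = unscented_transform(x0, P0, lambda s: A @ s)
```

Note the Cholesky call is the positive-definiteness requirement discussed above; square-root UKF variants avoid re-factorizing at every step.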
Figure 10. Schematic diagram of the unscented transformation, comparing actual sampling of y = f(x), the UT sigma points and their transformed covariance, and the EKF linearization P_y = A^T P_x A.

To calculate the estimate of the state vector x(k+1) using the UKF, one begins by developing a matrix of 2L+1 sigma vectors chi_i, each with a weight W_i, where L is the dimension of the state vector x (in this case L = 9). The sigma vectors and their weights are calculated by

chi_0 = x_bar
chi_i = x_bar + ( sqrt((L + lambda) P_x) )_i,    i = 1, ..., L
chi_i = x_bar - ( sqrt((L + lambda) P_x) )_{i-L},    i = L+1, ..., 2L
W_0^(m) = lambda / (L + lambda)
W_0^(c) = lambda / (L + lambda) + (1 - alpha^2 + beta)
W_i^(m) = W_i^(c) = 1 / {2(L + lambda)},    i = 1, ..., 2L    (1)

where ( sqrt((L + lambda) P_x) )_i is the ith row or column of the matrix square root of (L + lambda) P_x, and the weights are normalized. Each sigma point is propagated through the nonlinear function

y_i = f(chi_i),    i = 0, ..., 2L    (2)

The estimated mean and covariance of y are computed as weighted sums of the propagated points

y_bar = sum_{i=0}^{2L} W_i^(m) y_i
P_y = sum_{i=0}^{2L} W_i^(c) (y_i - y_bar)(y_i - y_bar)^T    (3)

Implementation of the above algorithm with the current estimate x_hat(k|k) and its covariance P_x(k|k) follows as [1]:

1. Compute the sigma points chi_i and their weights W_i (i = 0, ..., 2L) using (1) with alpha = 0.001 and beta = 2, using x_hat(k|k) and P_x(k|k). The matrix square root in (1) is found using Cholesky factorization.

2. The sigma points are propagated using the state equation such that

chi_i(k+1|k) = f[chi_i(k|k)]    (4)

3. From these new points, compute the predicted state and covariance

x_hat(k+1|k) = sum_{i=0}^{2L} W_i chi_i(k+1|k)    (5)
P(k+1|k) = sum_{i=0}^{2L} W_i [chi_i(k+1|k) - x_hat(k+1|k)][chi_i(k+1|k) - x_hat(k+1|k)]^T + Q(k)    (6)

4. Predict the observation sigma points using the observation equation

gamma_i(k+1|k) = h[chi_i(k+1|k)]    (7)

5. The observation estimate and covariances follow

y_hat(k+1|k) = sum_{i=0}^{2L} W_i gamma_i(k+1|k)    (8)
P_yy(k+1|k) = sum_{i=0}^{2L} W_i [gamma_i(k+1|k) - y_hat(k+1|k)][gamma_i(k+1|k) - y_hat(k+1|k)]^T + R(k+1)
P_xy(k+1|k) = sum_{i=0}^{2L} W_i [chi_i(k+1|k) - x_hat(k+1|k)][gamma_i(k+1|k) - y_hat(k+1|k)]^T    (9)

6.
Compute the UKF gain and the updated state and state covariance, similar to the standard Kalman filter:

K(k+1) = P_xy(k+1|k) P_yy^{-1}(k+1|k)
x_hat(k+1|k+1) = x_hat(k+1|k) + K(k+1) [ y(k+1) - y_hat(k+1|k) ]
P(k+1|k+1) = P(k+1|k) - K(k+1) P_yy(k+1|k) K^T(k+1)

Note that no explicit calculations of Jacobians or Hessians are necessary to implement this algorithm. The UKF requires the computation of a matrix square root, which can be implemented directly using a Cholesky factorization of order n_x^3/6. However, the covariance matrices can be expressed recursively, and thus the square root can be computed in order n_x^2 by performing a recursive update to the Cholesky factorization. So, not only does the UKF outperform the EKF in accuracy and robustness, it does so at no extra computational cost.

C. PARTICLE FILTER (PF)

The previous nonlinear filtering algorithms rely on Gaussian approximations. In this section we present the particle filtering method: a family of state-estimation methods that use a weighted set of samples (particles) in a Monte Carlo framework to approximate the posterior distribution of the state [10][11]. A large set of particles is generated and weighted to represent the posterior distribution of the state. Each particle is independently propagated through time in accordance with the dynamic state-space model and the system process noise. After a new observation y(k+1) arrives, importance-evaluation and resampling steps are used to assess the agreement of each individual particle with the observation. New particles are generated to replace the old ones, and the particle weights are updated to represent the new posterior. At each step in the process, a weighted sum of the particles is taken to calculate a new state estimate.

This study uses a standard particle filtering algorithm to estimate the state of the missile trajectory. The posterior distribution of the state is assumed to be Gaussian due to the additive Gaussian process noise. The steps of the algorithm are:

1.
Generate an initial set of N particles {x_0^(i), i = 1, ..., N} from the prior p(x_0), using a Gaussian distribution about the estimated location of the missile launch site with zero initial velocity and acceleration in each dimension. Each particle is given an equal normalized weight, so the ith particle weight is w_0^(i) = 1/N, where i = 1, ..., N. In this study N = 5000 or 10000.

2. For each particle:

(a) Importance sampling step: perform importance sampling on the particles. A likelihood l_k^(i) is computed from the difference between each propagated particle and the observation, and the corresponding weights are recursively updated based on this likelihood:

w_k^(i) is proportional to w_{k-1}^(i) l_k^(i)

with the weights normalized to sum to one.

(b) Selection step (resampling): resampling of the new weights is accomplished by comparing the particle weights to a threshold. Particles that fall below the threshold are discarded; those that meet the criterion are resampled to keep the number of particles constant.

(c) The new state estimate is then calculated at each time step by summing the weighted particle values:

x_hat_k = sum_{i=1}^{N} w_k^(i) x_k^(i)    (12)

D. UNSCENTED PARTICLE FILTER (UPF)

The unscented Kalman filter is able to propagate the mean and covariance of the Gaussian approximation to the state distribution more accurately than the EKF. In comparison to the EKF, the UKF tends to generate a more accurate estimate of the true covariance of the state. Distributions generated by the UKF generally have a bigger support overlap with the true posterior distribution than the overlap achieved by EKF estimates. This is in part related to the fact that the UKF calculates the posterior covariance accurately to the 3rd order, whereas the EKF relies on a first-order biased approximation. This makes the UKF a better candidate for accurate proposal-distribution generation within the particle filter framework.
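The generic particle-filter steps above (initialize, propagate, weight, resample, estimate) can be sketched on a hypothetical one-dimensional random-walk target. N, the noise levels and the trajectory are illustrative assumptions; the resampling here is simple multinomial resampling at every step rather than the thresholded scheme described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N, steps = 500, 20
q_std, r_std = 0.1, 0.5                       # process / measurement noise std devs
truth = 0.0
particles = rng.normal(0.0, 1.0, N)           # step 1: draw from the prior p(x0)
weights = np.full(N, 1.0 / N)                 # equal normalized weights 1/N

for _ in range(steps):
    truth += rng.normal(0.0, q_std)                      # true state evolves
    z = truth + rng.normal(0.0, r_std)                   # noisy observation
    particles += rng.normal(0.0, q_std, N)               # propagate each particle
    lik = np.exp(-0.5 * ((z - particles) / r_std) ** 2)  # likelihood l_k^(i)
    weights = weights * lik                              # step 2(a): weight update
    weights /= weights.sum()                             # normalize
    idx = rng.choice(N, size=N, p=weights)               # step 2(b): resample
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    x_hat = np.sum(weights * particles)                  # step 2(c): Eq. (12)
```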
The UKF also has the ability to scale the approximation errors for heavier-tailed distributions. Because the sigma-point set used in the UKF is deterministically designed to capture certain characteristics of the prior distribution, one can explicitly optimize the algorithm to work with distributions that have heavier tails than Gaussian distributions, e.g., Cauchy or Student-t distributions. This characteristic makes the UKF very attractive for the generation of proposal distributions [7]. The algorithm of the unscented particle filter is as follows.

1. Initialization: t = 0

a. For i = 1, ..., N, draw the particles x_0^(i) from the prior p(x_0) and set

x_bar_0^(i) = E[x_0^(i)]
P_0^(i) = E[ (x_0^(i) - x_bar_0^(i))(x_0^(i) - x_bar_0^(i))^T ]

with each particle's state augmented by the process- and measurement-noise terms to form the augmented mean x_bar_0^(i)a and covariance P_0^(i)a.

2. For t = 1, 2, ...

a. Importance sampling step. For i = 1, ..., N, update the particle with the UKF:

- Calculate the sigma points from x_bar_{t-1}^(i)a and P_{t-1}^(i)a.
- Propagate the particle into the future (time update): pass the sigma points through the state equation and form the predicted mean x_bar_{t|t-1}^(i) and covariance P_{t|t-1}^(i), as in Equations (5) and (6).
- Incorporate the new observation (measurement update): form y_hat_{t|t-1}^(i), P_yy and P_xy as in Equations (8) and (9), compute the gain K_t = P_xy P_yy^{-1}, and update

x_bar_t^(i) = x_bar_{t|t-1}^(i) + K_t ( y_t - y_hat_{t|t-1}^(i) )
P_t^(i) = P_{t|t-1}^(i) - K_t P_yy K_t^T

- Sample x_t^(i) from the proposal q(x_t^(i) | x_{0:t-1}^(i), y_{1:t}) = N(x_bar_t^(i), P_t^(i)).

For i = 1, ..., N, evaluate the importance weights up to a normalizing constant.

b. Selection step

- Multiply/suppress particles (x_{0:t}^(i), P_{0:t}^(i)) with high/low importance weights w_t^(i), respectively, to obtain N random particles (x_{0:t}^(i), P_{0:t}^(i)).

c. MCMC step (optional)

- Apply a Markov transition kernel with invariant distribution p(x_{0:t}^(i) | y_{1:t}) to obtain (x_{0:t}^(i), P_{0:t}^(i)).

d. Output: the output is generated in the same manner as for the generic particle filter.

IV.
SENSOR DATA FUSION

Sensor data fusion is the process of combining outputs from sensors with information from other sensors, information-processing blocks, databases or knowledge bases into one representational form. This technique is expected to achieve improved accuracy and more specific inferences than could be achieved by the use of a single sensor alone [14]. The applications of sensor data fusion include automated target recognition, guidance of autonomous vehicles, remote sensing, battlefield surveillance, automatic threat-recognition systems, monitoring of manufacturing processes, robotics and medicine. The techniques employed in data fusion are drawn from diverse disciplines such as digital signal processing, statistical estimation, control theory and classical numerical methods [14]. In principle, fusion of data from several sensors provides significant advantages over single-source data. In addition to the statistical advantage gained by combining same-source data, the use of multiple types of sensors may increase the accuracy with which a quantity can be observed and characterized. Raw data from similar sensors may be directly combined if the sensors are measuring the same physical phenomenon; however, feature/state-vector or decision-level fusion may be employed to combine data that come from non-commensurate sensors.

When building a sensor data fusion system, the architecture chosen for fusion plays a significant role in deciding the algorithm for data fusion. The choice of architecture is made with a view to balancing computing resources, available communication bandwidth, desired accuracy and the capabilities of the sensors [14]. The three commonly used architectures are (i) centralized, (ii) distributed/decentralized and (iii) hybrid. The distributed/decentralized architectures have several inherent operational advantages that align with current trends toward modular and autonomous systems [15].
For such architectures, information filtering, which essentially tracks information about states, is found to be most suitable. The information filter provides a direct interpretation of node observations and contributions in terms of information. Prediction and estimation in terms of information reduce the computational load, particularly for nonlinear multi-sensor systems.

The linear information filter (LIF) and the extended information filter (EIF) are described here. The information filter is a Kalman filter recast in terms of the information state vector and the information matrix. The advantage of this formulation is that the estimation equations are computationally simpler than the measurement-update equations of the Kalman filter, although prediction is more complex. The information filter is simpler to decouple and decentralize among a network of sensor nodes. A further advantage of the information filter is that it may be partitioned to provide a simple hierarchical estimation architecture, based on the communication of information terms from sensor nodes to a common fusion center, and of information state estimates from nodes to a common assimilation point. The proposed fusion architecture is also described at the end of this section.

A. INTRODUCTION TO THE INFORMATION FILTER

The diverse and sometimes conflicting information obtained from multiple sensors gives rise to the problem of how the information may be combined in a consistent and coherent manner. This is the data fusion problem. Multisensor fusion is the process by which information from multiple sensors is combined to yield a coherent description of the system under observation. A variety of information-based data fusion algorithms have been employed in the recent work of Mutambara [12] and of Manyika et al. [13].
Most current sensor fusion algorithms consider systems described by linear dynamics and observation models, whereas most practical problems have nonlinear dynamics and sensor information that depends nonlinearly on the states describing the environment. Information-based estimation makes fully decentralized data fusion for nonlinear systems attainable.

Consider a linear system with state-space representation

x(k) = F(k) x(k-1) + B(k) u(k-1) + w(k-1)    (1)

where x(k) is the state of interest at time k, F(k) is the state transition matrix from time (k-1) to k, and u(k) and B(k) are the input control vector and matrix, respectively. w(k) ~ N(0, Q(k)) is the associated process noise, modeled as an uncorrelated, zero-mean, white sequence. The system is observed according to the linear discrete equation

z(k) = H(k) x(k) + v(k)    (2)

where z(k) is the vector of observations made at time k, H(k) is the observation matrix or model, and v(k) ~ N(0, R(k)) is the associated observation noise, modeled as an uncorrelated white sequence. The estimate of the state x(i) at time i, given information up to and including time j, is given by

x_hat(i|j) = E[ x(i) | z(1), ..., z(j) ]
P(i|j) = E[ (x(i) - x_hat(i|j))(x(i) - x_hat(i|j))^T | z(1), ..., z(j) ]

B. LINEAR INFORMATION FILTER

For a system described by Equation (1) and observed according to Equation (2), the Kalman filter provides a recursive estimate x_hat(k|k) for the state x(k) at time k, given all information up to time k, in terms of the predicted state x_hat(k|k-1) and the new observation z(k). The one-step-ahead prediction, x_hat(k|k-1), is the estimate of the state at time k given only information up to time (k-1). The information filter is essentially a Kalman filter expressed in terms of measures of information about the parameters (states) of interest, rather than direct state estimates and their associated covariances. This filter has also been called the inverse covariance form of the Kalman filter.
Assuming Gaussian noise and minimum mean-squared-error estimation gives the Fisher information matrix as

Y(k|k) = P^{-1}(k|k).

This information matrix, the inverse of the covariance matrix, is central to the filtering techniques. The general description of a linear system is given by

x_{k+1} = Phi x_k + w_k    (3)
z_k = H x_k + v_k    (4)

where x_k is the state vector at time k and Phi is the transition matrix. The input noise w_k is assumed to be a zero-mean white Gaussian process with variance Q; the measurement noise v_k is assumed to be zero-mean white Gaussian with variance R. The information state vector y_hat(i|j) and the information matrix Y(i|j) are related to the state vector x_hat(i|j) and the covariance matrix P(i|j) by the equations

y_hat(i|j) = P^{-1}(i|j) x_hat(i|j)    (5)
Y(i|j) = P^{-1}(i|j)    (6)

Defining

i(k) = H^T(k) R^{-1} z(k)    (7)

as the information state contribution from an observation z(k),

I(k) = H^T(k) R^{-1} H(k)    (8)

as its associated information matrix, and

L(k|k-1) = P^{-1}(k|k-1) F(k) P(k-1|k-1)    (9)

as a propagation matrix (independent of the observation made), the linear information filter is written in terms of the information state and matrix as:

Prediction:

y_hat(k|k-1) = L(k|k-1) y_hat(k-1|k-1)    (10)
Y(k|k-1) = [ F(k) Y^{-1}(k-1|k-1) F^T(k) + Q(k) ]^{-1}    (11)

Estimation:

y_hat(k|k) = y_hat(k|k-1) + i(k)    (12)
Y(k|k) = Y(k|k-1) + I(k)    (13)

The advantage of this formulation is that the estimation equations (12) and (13) are computationally simpler than the measurement-update equations of the Kalman filter.

C. EXTENDED INFORMATION FILTER

The linear information filter can now be extended to a linearized estimation algorithm for nonlinear systems by using principles from both the derivation of the information filter and that of the EKF. This generates a filter that predicts and estimates information about nonlinear state parameters, given nonlinear observations and nonlinear system dynamics. It is the extended information filter (EIF) and provides a solution to the nonlinear estimation problem.
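The linear information-filter recursion of Equations (10)-(13) can be sketched directly in code. The model below (a constant-velocity plant observed in position) and its numbers are illustrative assumptions; because the information filter is algebraically identical to the Kalman filter, the recovered state estimate x = Y^{-1} y should track the observations.

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2)
R = np.array([[0.25]])

def info_filter_step(y, Y, z):
    """One predict/update cycle of the linear information filter."""
    # Prediction, Eq. (11): Y(k|k-1) = [F Y^-1 F^T + Q]^-1
    Y_pred = np.linalg.inv(F @ np.linalg.inv(Y) @ F.T + Q)
    # Eqs. (9)-(10): propagation matrix L, then y(k|k-1) = L y(k-1|k-1)
    L = Y_pred @ F @ np.linalg.inv(Y)
    y_pred = L @ y
    # Estimation, Eqs. (7)-(8) and (12)-(13): add the observation's information
    i_k = H.T @ np.linalg.inv(R) @ z
    I_k = H.T @ np.linalg.inv(R) @ H
    return y_pred + i_k, Y_pred + I_k

# Start from a loose prior, process two position observations, recover the state:
x0, P0 = np.array([0.0, 0.0]), 10.0 * np.eye(2)
y, Y = np.linalg.inv(P0) @ x0, np.linalg.inv(P0)
for z in (np.array([1.1]), np.array([2.0])):
    y, Y = info_filter_step(y, Y, z)
x_hat = np.linalg.solve(Y, y)      # x_hat = P y = Y^-1 y, via Eqs. (5)-(6)
```

Note how the estimation step is a pure addition of information terms, which is what makes the partitioned, decentralized forms discussed below straightforward.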
The EIF also has all the advantages of the information filter and resolves some of the problems associated with the EKF. Estimation for nonlinear systems, in particular multisensor systems, is best carried out using information variables rather than state variables.

It is important to note that the EIF cannot be extrapolated from the information filter and the EKF in an obvious manner. This is because the information filter depends on separating the observation contribution into a state-dependent part and a noise part, a separation that is possible only for a linear observation equation; in the nonlinear case this separation no longer holds directly. In a real data fusion problem, the state of interest may evolve in a nonlinear manner, in which case simple linear models cannot adequately describe such a system. Also, sensor observations may not be linear functions of the states. The general nonlinear system can be described by the following models:

x(k) = f(x(k-1), k) + w(k)    (14)
z(k) = h(x(k), k) + v(k)    (15)

where x(k) is the state vector and w(k) is the additive process-noise vector at time step k. The nonlinear function f(., k) is the state transition function mapping the previous state to the current state; z(k) is the observation vector at time k, and h(., k) is the nonlinear observation model mapping the current state to the observations. The EIF algorithm is as follows:

Prediction:

y_hat(k|k-1) = Y(k|k-1) f(k, x_hat(k-1|k-1))    (16)
Y(k|k-1) = [ grad_f_x(k) Y^{-1}(k-1|k-1) grad_f_x^T(k) + Q(k) ]^{-1}    (17)

Estimation:

y_hat(k|k) = y_hat(k|k-1) + i(k)    (18)
Y(k|k) = Y(k|k-1) + I(k)    (19)

The information state contribution and its matrix are given by

I(k) = grad_h_x^T(k) R^{-1}(k) grad_h_x(k)    (20)
i(k) = grad_h_x^T(k) R^{-1}(k) [ nu(k) + grad_h_x(k) x_hat(k|k-1) ]    (21)

where nu(k) is the innovation sequence given by

nu(k) = z(k) - h(x_hat(k|k-1))    (22)

The EIF described above has several attractive practical features:

• The filter solves, in information space, the linearized estimation problem for systems with both nonlinear dynamics and observations.
In addition to having all the attributes of the information filter, it is a more practical and general filter.

• The information estimation equations (18) and (19) are computationally simpler than the EKF estimation equations. This makes the partitioning of these equations for decentralized systems easy.

• Although the EIF prediction equations (16) and (17) are of the same apparent complexity as the EKF ones, they are easier to distribute and fuse because of the orthonormality properties of information-space parameters.

• Since the EIF is expressed in terms of information matrices and vectors, it is easily initialized compared to the EKF. Accurate initialization is important where linearized models are employed.

Some of the drawbacks inherent in the EKF still affect the EIF. These include the nontrivial nature of Jacobian-matrix derivation (and computation) and linearization instability.

D. DISTRIBUTED AND DECENTRALIZED DATA FUSION SYSTEMS

The nature of data fusion is that a number of sensors are physically distributed around an environment. In a centralized data fusion system, raw sensor information is communicated back to a central processor, where the information is combined to produce a single fused picture of the environment. In a distributed data fusion system, each sensor has its own local processor, which can generally extract useful information from the raw sensor data prior to communication. This has the advantage that less information is normally communicated, the computational load on the central processor is reduced, and the sensors themselves can be constructed in a reasonably modular manner. The degree to which local processing occurs at a sensor site varies substantially, from simple validation and data compression up to the full construction of tracks or the interpretation of information locally.
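The property that makes information filters attractive for distributed fusion is that the estimation equations (12)-(13) and (18)-(19) are additive: each sensor node can compute its own information contributions locally, and a fusion or assimilation point simply sums them onto the prediction. A minimal linear-observation sketch (the two sensor models and numbers below are illustrative assumptions):

```python
import numpy as np

def contributions(H, R, z):
    """Local node computation: i_j = H^T R^-1 z and I_j = H^T R^-1 H (Eqs. (7)-(8))."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ z, H.T @ Rinv @ H

# Predicted information state/matrix (as would come from Eqs. (10)-(11)):
y_pred = np.array([0.0, 0.0])
Y_pred = 0.1 * np.eye(2)

# Node 1 observes the first state component, node 2 the second:
i1, I1 = contributions(np.array([[1.0, 0.0]]), np.array([[0.5]]), np.array([2.0]))
i2, I2 = contributions(np.array([[0.0, 1.0]]), np.array([[0.2]]), np.array([-1.0]))

# The fusion center just sums the communicated information terms:
y_fused = y_pred + i1 + i2
Y_fused = Y_pred + I1 + I2
x_fused = np.linalg.solve(Y_fused, y_fused)   # recover the fused state estimate
```

Because only the small terms i_j and I_j cross the network, and addition is associative and commutative, the same sum can be formed at a central node, hierarchically, or in a fully decentralized node-to-node fashion.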
While for many systems a centralized approach to data fusion is adequate, the increasing sophistication, functional requirements, complexity and size of data fusion systems, coupled with the ever-decreasing cost of computing power, argue more and more for some form of distributed processing. The central issue in designing distributed data fusion systems is the development of appropriate algorithms that can operate at a number of distributed sites in a consistent manner.

This section begins with a general discussion of data fusion architectures and introduces the data fusion architecture used in this research.

1. Data Fusion Architecture

Distributed data fusion systems may take many forms. At the simplest level, sensors could communicate information directly to a central processor where it is combined. Little or no local processing of information need take place, and the relative advantage of having many sources of information is sacrificed to having complete centralized control over the processing and interpretation of this information. As more processing occurs locally, computational and communication burden can be removed from the fusion center, but at the cost of reduced direct control over low-level sensor information. Increasing the intelligence of local sensor nodes naturally results in a hierarchical structure for the fusion architecture. This has the advantage of imposing some order on the fusion process, but the disadvantage of placing a specific and often rigid structure on the fusion system. Other distributed architectures consider sensor nodes with significant local ability to generate tracks and engage in fusion tasks. Fully decentralized architectures have no central processor and no common communication system. In such systems, nodes can operate in a fully autonomous manner, coordinating only through autonomous node-to-node communication of information.
In a hierarchical structure, the lowest-level processing elements transmit information upwards through successive levels, where the information is combined and refined, until at the top level some global view of the state of the system is made available. Such hierarchical structures are common in many organizations and have many well-known advantages over fully centralized systems, particularly in reducing the load on a centralized processor while maintaining strict control over sub-processor operations.

The hierarchical approach to systems design has been employed in a number of data fusion systems and has resulted in a variety of useful algorithms for combining information at different levels of a hierarchical structure. Figures 11 and 12 show hierarchical fusion systems. A hierarchical approach to the design of data fusion systems also comes with a number of inherent disadvantages. The ultimate reliance on some central processor or controlling level within the hierarchy means that reliability and flexibility are often compromised: failure of this central unit leads to failure of the whole system, and changes in the system often mean changes in both the central unit and in all related sub-units. Further, the burden placed on the central unit in terms of combining information can often still be prohibitive, and can lead to an inability of the design methodology to be extended to incorporate an increasing number of sources of information.

Figure 11. Single-Level Hierarchical Multiple-Sensor Tracking System: N tracking systems process sensor measurements z_1(k), ..., z_N(k) and pass their tracks to a track fusion algorithm that produces a combined track.

Figure 12.
Multiple-Level Hierarchical Multiple-Sensor Tracking System (group-picture tracks, combined-picture tracks, and site-level track fusion)

The move to more distributed, autonomous organizations is clear in many information processing systems. It is most often motivated by two main considerations: the desire to make the system more modular and flexible, and the recognition that a centralized or hierarchical structure imposes unacceptable overheads on communication and central computation. The migration to distributed system organizations is most apparent in Artificial Intelligence (AI) application areas, where distributed AI has become a research area in its own right. Many of the most interesting distributed processing organizations have originated in this area.

Figure 13. Blackboard Architecture in Data Fusion

A notable example is the "Blackboard" architecture, originally developed in the Hearsay speech understanding program but now widely employed in many areas of AI and data fusion research. A Blackboard architecture consists of a number of independent, autonomous "agents." Each agent represents a source of expert knowledge or a specialized information facility, and all agents share a common memory resource, called the blackboard, which is designed to closely replicate its physical analogue. Each agent is able to write information or local knowledge to this resource, and every agent in the system is able to read from it, in an unrestricted manner, any information it considers useful in its current task. In principle, every agent can be made modular, and new agents may be added to the system when needed without changing the underlying architecture or operation of the system as a whole. The flexibility of this approach to system organization has made the Blackboard architecture popular in a range of application domains.
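The blackboard pattern described above can be sketched minimally as follows. This is an illustrative sketch only; the class and method names are our own, not from the Hearsay system or the report.

```python
class Blackboard:
    """Shared memory resource that every agent may read and write."""
    def __init__(self):
        self.entries = {}

    def write(self, key, value):
        self.entries[key] = value

    def read(self, key, default=None):
        # Any agent may read any entry, in an unrestricted manner.
        return self.entries.get(key, default)


class Agent:
    """An autonomous knowledge source. New agents can be added without
    changing the blackboard or the other agents."""
    def __init__(self, name, board):
        self.name, self.board = name, board

    def contribute(self, key, value):
        self.board.write(key, value)


board = Blackboard()
# Two independent agents post local knowledge to the shared resource.
Agent("remote-sensor", board).contribute("track_1", {"range_m": 1200.0})
Agent("track-identity", board).contribute("track_1_id", "hostile")
print(board.read("track_1_id"))
```

The modularity claim in the text corresponds to the fact that adding a third agent requires no change to `Blackboard` or to the existing agents.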
In data fusion, the Blackboard approach has been most widely used for knowledge-based data fusion systems in data interpretation and situation assessment. However, the structured nature of tracking and identification problems does not lend itself to this anarchic organizational form.

A decentralized data fusion system consists of a network of sensor nodes, each with its own processing facility, which together require no central fusion or central communication facility. In such a system, fusion occurs locally at each node on the basis of local observations and the information communicated from neighboring nodes. At no point is there a common place where global fusion decisions are made. A decentralized data fusion system is characterized by three constraints:

• There is no single central fusion center; no node should be central to the successful operation of the network.
• There is no common communication facility; nodes cannot broadcast results, and communication must be kept on a strictly node-to-node basis.
• Sensor nodes do not have any global knowledge of the sensor network topology; nodes should only know about connections in their own neighborhood.

Figure 14 shows three possible realizations of a decentralized data fusion system. The key point is that none of these systems has a central fusion center. The constraints imposed provide a number of important characteristics for decentralized data fusion systems:

• Eliminating the central fusion center and any common communication facility ensures that the system is scalable, as there are no limits imposed by centralized computational bottlenecks or lack of communication bandwidth.
• Ensuring that no node is central and that no global knowledge of the network topology is required for fusion means that the system can be made survivable to the on-line loss (or addition) of sensing nodes and to dynamic changes in the network structure.
• As all fusion processes take place locally at each sensor site and no global knowledge of the network is required a priori, nodes can be constructed and programmed in a modular fashion.

These characteristics give decentralized systems a major advantage over more traditional sensing architectures, particularly in defense applications.

Figure 14. A decentralized data fusion system with fully connected fusion processors

Figure 15. A decentralized sensing network of sensor nodes communicating over a common medium

Figure 16. A decentralized network of sensor payloads with external communication

2. Proposed Fusion Architecture

The proposed hierarchical fusion architecture is a simple hierarchical estimation architecture based on the communication of the information terms i_t(k) and I_t(k) from sensor nodes to a common fusion center. In this system, information-state contributions are calculated at each sensor node and transmitted to a central fusion center, where a common estimate is obtained by simple summation. All state predictions are undertaken at the central processor. Each sensor incorporates a full state model and takes observations according to Equation (2). Each calculates an information-state contribution from its observations in terms of i_t(k) and I_t(k). These are then communicated to the fusion center and incorporated into the global estimate through Equations (18) and (19). The information-state prediction is generated centrally using Equations (16) and (17), and the state estimate itself may be found at any stage from \( \hat{x}(i \mid j) = Y^{-1}(i \mid j)\, \hat{y}(i \mid j) \). To avoid communicating predictions to nodes, any validation or data association should take place at the fusion center. Figure 17 shows the hierarchical architecture.

Figure 17. A proposed simple hierarchical fusion architecture

V.
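The summation step of this architecture can be sketched as follows. This is a toy sketch under assumed values: the two-state dimension, the function name `fuse`, and all numeric values are illustrative, not from the report; only the structure (sum the contributions, then recover the state as \( \hat{x} = Y^{-1}\hat{y} \)) follows the text.

```python
import numpy as np

def fuse(y_pred, Y_pred, contributions):
    """Fusion-center update in information form.

    contributions: list of (i_t, I_t) pairs sent by the sensor nodes,
    where i_t is an information-state contribution and I_t an
    information-matrix contribution.
    """
    y = y_pred + sum(i for i, _ in contributions)  # add information states
    Y = Y_pred + sum(I for _, I in contributions)  # add information matrices
    x_hat = np.linalg.solve(Y, y)                  # x_hat = Y^-1 y
    return x_hat, y, Y

# Central information-state prediction and two sensor contributions (toy values)
y_pred = np.array([1.0, 0.0])
Y_pred = np.eye(2)
contribs = [(np.array([0.5, 0.1]), 0.5 * np.eye(2)),
            (np.array([0.4, 0.2]), 0.5 * np.eye(2))]

x_hat, y, Y = fuse(y_pred, Y_pred, contribs)
print(x_hat)
```

The appeal of the information form is visible here: incorporating another sensor is just one more term in each sum, which is what makes the architecture extend naturally to decentralized processing.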
SIMULATION RESULTS

In this section, we describe the results of simulation studies on the performance of the nonlinear filters and of sensor fusion using the proposed fusion architecture. The IMPULSE model and MATLAB are the tools used to implement the simulation; specifically, the results from the IMPULSE program are used by the MATLAB code from [16] to simulate the tracking problem. The simulation uses 12 trajectories in total: an upper-stage and a lower-stage trajectory for each of six ballistic missiles. A detailed description of the requirements (inputs) and capabilities (outputs) of the IMPULSE program is given in [16]. An unclassified model of the ballistic missile is used. Figure 18 shows the trajectories.

Figure 18. Six near-simultaneous launches of ballistic missiles (viewed looking north). The launch delays range from four- to ten-second intervals. Missile staging occurs at 65 to 85 seconds, simulation time; multiple upper stages and jettisoned booster stages are correctly tracked. The coordinate system is local vertical (north-east-up) with respect to the sensor.

Figure 19. Same profile as in Figure 18: six near-simultaneous ballistic missile launches (looking east-to-west) and seven RF sensors.

Table 4 lists the positions of the sensors in the vicinity of the launched ballistic missiles. The radars are placed at sea level.

Sensor   Longitude       Latitude       Altitude
RF#1     131°46'44"E     40°50'45"N     Sea level
RF#2     132°46'44"E     41°40'45"N     Sea level
RF#3     131°00'00"E     40°00'00"N     Sea level
RF#4     130°00'00"E     35°00'00"N     Sea level
RF#5     135°00'00"E     40°00'00"N     Sea level
RF#6     140°00'00"E     45°00'00"N     Sea level
RF#7     145°00'00"E     50°00'00"N     Sea level

Table 4. Sensor positions

A. MODELS OF TARGET MOTION AND RADAR MEASUREMENTS

The ballistic missile tracking algorithm is an EIF based on the extended Kalman form.
The system equations are the standard tracking equations

\[ x_{k+1} = F_k x_k + G_k + \omega_k, \qquad z_k = h(x_k) + v_k \]

where the missile state vector is given as

\[ x = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & x_9 \end{bmatrix}^T = \begin{bmatrix} x & \dot{x} & \ddot{x} & y & \dot{y} & \ddot{y} & z & \dot{z} & \ddot{z} \end{bmatrix}^T \]

The term \( F_k \) is the linear state transition matrix, block diagonal with one block per coordinate:

\[ F_k = \mathrm{diag}(F, F, F), \qquad F = \begin{bmatrix} 1 & \Delta & \Delta^2/2 \\ 0 & 1 & \Delta \\ 0 & 0 & 1 \end{bmatrix} \]

The gravity matrix \( G_k \), which accounts for the acceleration in z due to gravity with \( g = 9.8 \ \mathrm{m\,s^{-2}} \), is

\[ G_k = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & -g\Delta^2/2 & -g\Delta & 0 \end{bmatrix}^T \]

and \( \omega_k \) is the plant noise with covariance \( Q_k \),

\[ Q_k = q^2\,\mathrm{diag}(Q, Q, Q), \qquad Q = \begin{bmatrix} \Delta^5/20 & \Delta^4/8 & \Delta^3/6 \\ \Delta^4/8 & \Delta^3/3 & \Delta^2/2 \\ \Delta^3/6 & \Delta^2/2 & \Delta \end{bmatrix} \]

where \( \Delta \) is the sampling interval and \( q^2 \) is a scaling factor used to account for unmodeled target maneuver accelerations. The term \( v_k \) is the measurement noise with covariance

\[ R_k = \mathrm{diag}(\sigma_{range}^2,\ \sigma_{azimuth}^2,\ \sigma_{elevation}^2) \]

Although the missile dynamics in this system are linear, the measurement process is nonlinear. The sensor platform observes the missile positions through measurements of range, azimuth, and elevation relative to the sensor. Let

\[ h_k = \begin{bmatrix} \text{range} \\ \text{azimuth} \\ \text{elevation} \end{bmatrix} \]

where, with \( r = \sqrt{x^2 + y^2 + z^2} \) and \( s = \sqrt{x^2 + y^2} \),

\[ \text{range} = r, \qquad \text{azimuth} = \tan^{-1}(y/x), \qquad \text{elevation} = \tan^{-1}(z/s) \]

These measurement equations are nonlinear and therefore must be linearized using a series expansion of the measurement equation \( h_k \). Applying the definition of the H matrix, the gradient of \( h_k \) is

\[ H_k = \frac{\partial h}{\partial x} = \begin{bmatrix} \partial r/\partial x_1 & \cdots & \partial r/\partial x_9 \\ \partial a/\partial x_1 & \cdots & \partial a/\partial x_9 \\ \partial e/\partial x_1 & \cdots & \partial e/\partial x_9 \end{bmatrix} \]

which, with the position states \( x_1 = x \), \( x_4 = y \), \( x_7 = z \), simplifies to

\[ H_k = \begin{bmatrix} x/r & 0 & 0 & y/r & 0 & 0 & z/r & 0 & 0 \\ -y/s^2 & 0 & 0 & x/s^2 & 0 & 0 & 0 & 0 & 0 \\ -xz/(r^2 s) & 0 & 0 & -yz/(r^2 s) & 0 & 0 & s/r^2 & 0 & 0 \end{bmatrix} \]

Finally, the approximate, linearized system of equations for the state prediction and measurement is the following.
\[ x_{k+1} = F_k x_k + G_k + \omega_k, \qquad z_k = H_k x_k + v_k \]

B. COMPARISON OF PERFORMANCE OF NONLINEAR FILTERS

In this section, we describe the experimental results of comparing the performance of three nonlinear filters, the EKF, UKF, and UPF, since the PF did not converge. We used the distance error, the square root of the sum of the squared errors in each coordinate, as the measure to evaluate performance. In this study, two sets of measurement-noise standard deviations are chosen: first \( \sigma_{range} = \sigma_{azimuth} = \sigma_{elevation} = 0 \), and then \( \sigma_{range} = 10 \) meters, \( \sigma_{azimuth} = \sigma_{elevation} = 1° \).

Figures 20-23 show the average distance errors for two of the trajectories over 30 Monte Carlo runs. Each sensor performs the estimation for each trajectory, and the results are then averaged over the time period. The red line shows the distance error calculated from each coordinate after the EKF estimation; the green line shows the results for the UKF; and the blue line is for the UPF. The diamond marker designates the distance error at the final time. As the graphs show, when the standard deviations for measurement noise are all set to 0, the EKF is the best.

Figure 20. Average distance error of seven sensors for the 1st upper-stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)

Figure 21. Average distance error of seven sensors for the 1st lower-stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)

Figure 22.
Average distance error of seven sensors for the 2nd upper-stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)

Figure 23. Average distance error of seven sensors for the 2nd lower-stage trajectory (σ_range = σ_azimuth = σ_elevation = 0)

Table 5 shows the average distance error and the distance error at the final time for all 12 trajectories.

Trajectory   EKF (Avg / Last)    UKF (Avg / Last)    UPF (Avg / Last)
U1           0.0089 / 0.0238     0.0258 / 0.0237     0.0223 / 0.0399
L1           0.0034 / 0.0001     0.0308 / 0.0264     0.0222 / 0.0203
U2           0.0630 / 0.0381     0.0281 / 0.0506     0.0222 / 0.0334
L2           0.0026 / 0.0007     0.0370 / 0.0280     0.0247 / 0.0226
U3           0.0076 / 0.0231     0.0259 / 0.0226     0.0214 / 0.0350
L3           0.0036 / 0.0008     0.0323 / 0.0283     0.0229 / 0.0152
U4           0.0166 / 0.2043     0.0315 / 0.0988     0.0292 / 0.0938
L4           0.0013 / 0.0007     0.0290 / 0.0212     0.0201 / 0.0142
U5           0.0079 / 0.0559     0.0370 / 0.0316     0.0273 / 0.0259
L5           0.0047 / 0.0006     0.0374 / 0.0330     0.0260 / 0.0175
U6           0.0067 / 0.0078     0.0282 / 0.0595     0.0225 / 0.0410
L6           0.0037 / 0.0006     0.0385 / 0.0325     0.0261 / 0.0253

Table 5. Average distance error and distance error at the final time for three nonlinear filters over 12 trajectories (σ_range = σ_azimuth = σ_elevation = 0)

However, the performance of the EKF degrades very badly in the second case. Figures 24-27 show the average distance error for two of the trajectories. The second figure in each pair compares the distance error of only the UKF and the UPF; as the results show, both work well, without the performance degradation the EKF suffers in the second case. Even though the UPF has the best performance of the three filters, because of its time complexity we conclude that the UKF is the best estimation algorithm for ballistic missile tracking.

Figure 24.
Average distance error of seven sensors for the 1st lower-stage trajectory (σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)

Figure 25. Distance error for the 1st lower-stage trajectory, UKF vs. UPF (σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)

Figure 26. Average distance error of seven sensors for the 1st upper-stage trajectory (σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)

Figure 27. Distance error for the 1st upper-stage trajectory, UKF vs. UPF (σ_range = 10 meters, σ_azimuth = 1°, σ_elevation = 1°)

Trajectory   EKF (Avg / Last)       UKF (Avg / Last)    UPF (Avg / Last)
U1           16.3177 / 89.0724      0.0244 / 0.0368     0.0149 / 0.0364
L1           10.2068 / 0.3315       0.0311 / 0.0259     0.0147 / 0.0089
U2           18.9756 / 160.6896     0.0281 / 0.0765     0.0182 / 0.0713
L2           25.2276 / 0.0115       0.1107 / 0.0280     0.1007 / 0.0260
U3           16.1110 / 48.2313      0.0258 / 0.0301     0.0167 / 0.0447
L3           9.0777 / 3.0363        0.0324 / 0.0298     0.0168 / 0.0116
U4           28.1213 / 179.6960     0.0339 / 0.1223     0.0197 / 0.0163
L4           2.6186 / 0.0115        0.0287 / 0.0214     0.0141 / 0.0055
U5           12.4995 / 36.0934      0.0302 / 0.0542     0.0206 / 0.0261
L5           15.1794 / 1.1139       0.0368 / 0.0332     0.0196 / 0.0251
U6           16.8421 / 38.3890      0.0284 / 0.0353     0.0173 / 0.0141
L6           18.1476 / 7.1747       0.0385 / 0.0325     0.0206 / 0.0095

Table 6. Average distance error during the total time interval and distance error at the final time for three nonlinear filters over 12 trajectories (σ_range = 10 m, σ_azimuth = 1°, σ_elevation = 1°)

C. MULTI-SENSOR FUSION

We compare the performance of the sensor fusion method using the EIF algorithm.
We used the distance error, the square root of the sum of the squared errors in each coordinate, as the measure to evaluate performance. In this study, the measurement noise is chosen to have the following standard deviations: \( \sigma_{range} = 10 \) meters, \( \sigma_{azimuth} = 1° \), \( \sigma_{elevation} = 1° \).

In order to evaluate the performance of the sensor fusion method, the Track Score Function (TSF) proposed in previous master's theses is considered. The likelihood ratio for a combination of data, taking the prior probabilities into account, is given as [19]:

\[ \Lambda = \frac{P(D \mid H_1)\, P_0(H_1)}{P(D \mid H_0)\, P_0(H_0)} = \frac{P(D \mid H_1)\, P_T}{P(D \mid H_0)\, P_F} \]

where \( H_1 \) is the true-target hypothesis, \( H_0 \) is the false-target hypothesis, \( P_T \) is the prior probability of a true target, \( P_F \) is the prior probability of a false target, \( P(D \mid H_1) \) is the probability density function assuming that \( H_1 \) is correct, and \( P_0(H_1) \) is the prior probability of \( H_1 \).

The performance of the tracking system should be independent of the behavior of the target; the system is evaluated for its ability to respond instantly to any change in the trajectory of the target, in either velocity or direction. Taking K scans of data, where the measurement error of one scan is uncorrelated with the error of the previous scan, the likelihood ratio \( \Lambda(K) \) is given as the product of the likelihood ratio due to the kinematics (\( \Lambda_a \)) and the likelihood ratio due to the signal (\( \Lambda_s \)):

\[ \Lambda(K) = \Lambda_0 \prod_{k=1}^{K} \Lambda_a \Lambda_s \]

where \( \Lambda_0 \) is the initial likelihood ratio. In this experiment, we considered only the signal-related likelihood ratio. Based on the likelihood ratio of the signal-related data, the track score is given by [18, Chapter 6]:

\[ \Lambda_s = \frac{P(Det \mid H_1)\, p(y_s \mid Det, H_1)}{P(Det \mid H_0)\, p(y_s \mid Det, H_0)} \]

Taking both \( P_D \) and \( P_{FA} \) into account, this can be written as:

\[ \Lambda_s = \frac{P_D\, p(y_s \mid Det, H_1)}{P_{FA}\, p(y_s \mid Det, H_0)} \]

In the case of radar, when the SNR is available, the signal likelihood ratio uses the probability density function under \( H_1 \), given by

\[ p(y \mid H_1) = \frac{1 + \bar{y}/(1 + 2\bar{y})}{1 + \bar{y}/2} \exp\!\left( -\frac{y}{1 + \bar{y}/2} \right) \]

where y is the SNR data and \( \bar{y} \) is the average SNR.
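The accumulation of the signal-related track score over scans can be sketched as follows. This is an illustrative sketch under assumed values: the detection probability P_D, the false-alarm probability P_FA, the average SNR, and the simple exponential-family density used for p(y|H1) are stand-ins for demonstration; the exponential false-alarm density p(y|H0) = exp(-y) is the one given in the text.

```python
import math

def signal_score(y, p1, p0, P_D=0.9, P_FA=1e-4):
    """Per-scan signal likelihood ratio:
    Lambda_s = P_D * p(y|Det,H1) / (P_FA * p(y|Det,H0))."""
    return (P_D * p1(y)) / (P_FA * p0(y))

def track_score(snr_samples, p1, p0, **kw):
    """Accumulate Lambda(K) as the product of per-scan ratios
    (only the signal term is used, as in this experiment)."""
    score = 1.0
    for y in snr_samples:
        score *= signal_score(y, p1, p0, **kw)
    return score

p0 = lambda y: math.exp(-y)                            # p(y|H0) from the text
ybar = 10.0                                            # assumed average SNR
p1 = lambda y: math.exp(-y / (1 + ybar)) / (1 + ybar)  # assumed H1 model

# A track with consistently strong SNR accumulates a score well above 1.
print(track_score([8.0, 12.0, 9.0], p1, p0))
```

Because the densities are passed in as functions, the heavy-tailed H1 density of [19] can be substituted for the assumed model without changing the scoring loop.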
The probability density function under \( H_0 \) is given by

\[ p(y \mid H_0) = \exp(-y) \]

Figures 28-51 show the distance error in the experiments over 30 completed Monte Carlo runs. The yellow line shows the distance error calculated from the sensor with the smallest error across the three coordinates, where each sensor tracks the same target independently. The blue and green lines are related to the TSF: one shows the results from the sensor with the largest TSF at each time interval, and the other shows the results of fusing the top four sensors according to the TSF. Finally, the red line fuses all seven sensors. Two figures of results are generated for each trajectory. The first shows the distance-error graphs of the three fusion methods and of the sensor with the smallest error at each time interval; the second enlarges the results of only the all-sensor and four-sensor fusion methods. The diamond marker designates the distance error at the final time.

Figure 28. Distance error for the 1st upper-stage trajectory (three fusion methods and sensor with smallest error)

Figure 29. Distance error for the 1st upper-stage trajectory (all sensors vs. four sensors)

Figure 30. Distance error for the 1st lower-stage trajectory (three fusion methods and sensor with smallest error)

Figure 31. Distance error for the 1st lower-stage trajectory (all sensors vs. four sensors)

Figure 32. Distance error for the 2nd upper-stage trajectory (three fusion methods and sensor with smallest error)

Figure 33.
Distance error for the 2nd upper-stage trajectory (all sensors vs. four sensors)

Figure 34. Distance error for the 2nd lower-stage trajectory (three fusion methods and sensor with smallest error)

Figure 35. Distance error for the 2nd lower-stage trajectory (all sensors vs. four sensors)

Figure 36. Distance error for the 3rd upper-stage trajectory (three fusion methods and sensor with smallest error)

Figure 37. Distance error for the 3rd upper-stage trajectory (all sensors vs. four sensors)

Figure 38. Distance error for the 3rd lower-stage trajectory (three fusion methods and sensor with smallest error)

Figure 39. Distance error for the 3rd lower-stage trajectory (all sensors vs. four sensors)

Figure 40. Distance error for the 4th upper-stage trajectory (three fusion methods and sensor with smallest error)

Figure 41. Distance error for the 4th upper-stage trajectory (all sensors vs. four sensors)

Figure 42. Distance error for the 4th lower-stage trajectory (three fusion methods and sensor with smallest error)

Figure 43. Distance error for the 4th lower-stage trajectory (all sensors vs. four sensors)

Figure 44.
Distance error for the 5th upper-stage trajectory (three fusion methods and sensor with smallest error)

Figure 45. Distance error for the 5th upper-stage trajectory (all sensors vs. four sensors)

Figure 46. Distance error for the 5th lower-stage trajectory (three fusion methods and sensor with smallest error)

Figure 47. Distance error for the 5th lower-stage trajectory (all sensors vs. four sensors)

Figure 48. Distance error for the 6th upper-stage trajectory (three fusion methods and sensor with smallest error)

Figure 49. Distance error for the 6th upper-stage trajectory (all sensors vs. four sensors)

Figure 50. Distance error for the 6th lower-stage trajectory (three fusion methods and sensor with smallest error)

Figure 51. Distance error for the 6th lower-stage trajectory (all sensors vs. four sensors)

Tables 7 and 8 show the average distance error over the total time period for the three sensor fusion methods and for the sensor with the smallest error at each time interval. The result of fusing all sensors or four sensors is better than the result from the single best-performing sensor. Also, the performance of all-sensor fusion is only slightly better than that of four-sensor fusion. In conclusion, we can obtain good performance through sensor fusion based on the Extended Information Filter and can overcome the drawbacks of the Extended Kalman Filter in terms of both performance and execution time.
Trajectory   All sensors   Four sensors   TSF (Track Score Function)   Sensor with smallest error
U1           0.0225        0.0238         3.2469                       1.4361
U2           0.0385        0.0412         3.5311                       1.5719
U3           0.0406        0.0437         3.1981                       1.5035
U4           0.0294        0.0311         3.9471                       1.5320
U5           0.0887        0.0913         5.1942                       1.8709
U6           0.0177        0.0184         3.4899                       1.5368
L1           0.0444        0.0468         3.8185                       1.6734
L2           0.0644        0.0681         4.3270                       1.8802
L3           0.0986        0.0996         4.9319                       1.6297
L4           0.0208        0.0218         3.5549                       1.6639
L5           0.0359        0.0425         3.3999                       1.4595
L6           0.0625        0.0598         4.4784                       1.8106

Table 7. Average distance error of the Monte Carlo runs for 12 trajectories (km)

Trajectory   All sensors   Four sensors   TSF (Track Score Function)   Sensor with smallest error
U1           0.0165        0.0302         6.1695                       3.0208
U2           0.0434        0.0667         6.7872                       2.8399
U3           0.0407        0.0397         6.7577                       3.3037
U4           0.0297        0.03087        6.5375                       3.1721
U5           0.0915        0.1157         10.9499                      3.5601
U6           0.0166        0.0232         6.2319                       2.8892
L1           0.0273        0.0294         6.3306                       2.8978
L2           0.0330        0.0439         7.7776                       3.4658
L3           0.0581        0.1807         10.1092                      3.4979
L4           0.0104        0.0237         5.4433                       3.3111
L5           0.0393        0.0281         5.9995                       2.9402
L6           0.0560        0.1149         8.2490                       3.4377

Table 8. Average distance error at the final time of the Monte Carlo runs for 12 trajectories (km)

VI. CONCLUSION

This study has evaluated the performance of four nonlinear filters, the Extended Kalman Filter, the Unscented Kalman Filter, the Particle Filter, and the Unscented Particle Filter, and has developed a sensor fusion method for tracking multiple ballistic targets during boost phase using multiple sensors. The research makes use of the IMPULSE© simulation tool to generate the ballistic missile flight profiles that serve as the test data for the tracking, and uses the distance error as the performance measure. Our experimental comparison of the three nonlinear filters shows that the UPF has the best performance, but that the UKF is the best choice in terms of time complexity and performance, while the standard PF does not converge.

The fusion of data from several sensors provides significant advantages over single-source data.
In addition to the statistical advantage gained by combining same-source data, the use of multiple types of sensors may increase the accuracy with which a quantity can be observed and characterized. In this study, seven active ground-based radar sensors are used to track the ballistic missiles during boost phase. We consider the Information Filter, which is an efficient form for fusing the information from multiple sensors without information loss. To evaluate the performance of the Extended Information Filter, we compare it with the track score function proposed in Patsikas's previous work, in which the track scoring function is calculated to identify the sensor with the best track file. This study shows that multiple-sensor fusion outperforms the single sensor with the best performance, and that the information form of the filter has an advantage in constructing a more efficient fusion architecture, one that is hierarchical and easily lends itself to decentralized processing.

VII. FUTURE WORK

In future work, the fusion architecture should be extended to a more efficient, fully decentralized form, so that its performance and efficiency can be compared with those of the hierarchical fusion architecture. In addition, the Information Filter based on the Unscented Kalman Filter should be applied to track boost-phase ballistic missiles, in place of the Extended Information Filter.

LIST OF REFERENCES

[1] A. Farina, B. Ristic, and D. Benvenuti, "Tracking a Ballistic Target: Comparison of Several Nonlinear Filters," IEEE Transactions on Aerospace and Electronic Systems, 38(3), 854-867, July 2002.
[2] M. G. S. Bruno and A. Pavlov, "Improved Particle Filters for Ballistic Target Tracking," Optical Engineering, 43(6), 1-14, June 2004.
[3] B. G. Saulson and K. C. Chang, "Nonlinear Estimation Comparison for Ballistic Target Tracking," ICASSP, II-705-708, 2004.
[4] G. M. Siouris, G. Chen, and J.
Wang, "Tracking an Incoming Ballistic Missile Using an Extended Interval Kalman Filter," IEEE Transactions on Aerospace and Electronic Systems, 33(1), 232-239, January 1997.
[5] D. K. Barton et al., "Report of the American Physical Society Study Group on Boost-Phase Intercept Systems for National Missile Defense: Scientific and Technical Issues," Reviews of Modern Physics, 76, S1, 2004.
[6] T. Lefebvre, H. Bruyninckx, and J. De Schutter, "Kalman Filters for Nonlinear Systems: A Comparison of Performance," Internal Report 01R033, Katholieke Universiteit Leuven, October 2001.
[7] R. van der Merwe, N. de Freitas, A. Doucet, and E. Wan, "The Unscented Particle Filter," Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department, August 2000.
[8] S. J. Julier and J. K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," Proc. of AeroSense: The 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls, Orlando, Florida, Vol. Multi Sensor Fusion, Tracking and Resource Management II.
[9] S. J. Julier and J. K. Uhlmann, "Reduced Sigma Point Filters for the Propagation of Means and Covariances Through Nonlinear Transformations," http://citeseer.nj.nec.com/julier98reduced.html.
[10] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proceedings-F, 140(2), April 1993.
[11] D. Crisan and A. Doucet, "A Survey of Convergence Results on Particle Filtering Methods for Practitioners," IEEE Transactions on Signal Processing, 50(3), 736-746, March 2002.
[12] A. G. O. Mutambara, Decentralized Estimation and Control for Multisensor Systems, CRC Press, 1998.
[13] J. Manyika, S. Grime, and H. F. Durrant-Whyte, "A Formally Specified Decentralized Architecture for Multi-Sensor Data Fusion," in Transputing '91, pp. 609-628, IOS Press, 1991.
[14] D. L. Hall, Mathematical Techniques in Multi-Sensor Data Fusion, Artech House, London, 1992.
[15] S.
Grime and H. F. Durrant-Whyte, "Data Fusion in Decentralized Sensor Networks," Control Engineering Practice, 2(5), 849-863, 1994.
[16] B. Rakdham, "Efficient Multiple Hypothesis Track Processing of Boost-Phase Ballistic Missiles Using IMPULSE-Generated Threat Models," MSEE Thesis, Naval Postgraduate School, Monterey, CA, September 2006.
[17] E. Garrido, "Graphical User Interface for a Physical Optics Radar Cross Section Prediction Code," Master's Thesis, Naval Postgraduate School, Monterey, CA, 2000.
[18] S. Blackman and R. Popoli, Modern Tracking Systems, Norwood, MA: Artech House, 1999.
[19] P. F. Singer and D. M. Sasaki, "The Heavy-Tailed Distribution of a Common CFAR Detector," Signal and Data Processing of Small Targets 1995, Proc. SPIE 2561, pp. 124-140, 1995.
[20] D. Patsikas, "Track Score Processing of Multiple Dissimilar Sensors," MSEE Thesis, Naval Postgraduate School, Monterey, CA, June 2007.
[21] D. B. Reid, "An Algorithm for Tracking Multiple Targets," IEEE Transactions on Automatic Control, AC-24(6), December 1979.
[22] V. Nagarajan, M. R. Chidambara, and R. N. Sharma, "New Approach to Improved Detection and Tracking Performance in Track-While-Scan Radars," IEE Proceedings, 134, Pt. F, No. 1, February 1987.
[23] I. J. Cox and S. L. Hingorani, "An Efficient Implementation of Reid's Multiple Hypothesis Tracking Algorithm and Its Evaluation for the Purpose of Visual Tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(2), February 1996.
[24] Y. Bar-Shalom and X. Li, Estimation and Tracking: Principles, Techniques, and Software, Artech House, Norwood, MA, 1993.
[25] L. D. Stone, C. A. Barlow, and T. L. Corwin, Bayesian Multiple Target Tracking, Artech House, Maryland, 1999.

INITIAL DISTRIBUTION LIST

1. Defense Technical Information Center
   8725 John J. Kingman Rd., STE 0944
   Ft. Belvoir, VA 22060-6218

2. Dudley Knox Library, Code 013
   Naval Postgraduate School
   Monterey, CA 93943-5100

3.
Research Office, Code 09
   Naval Postgraduate School
   Monterey, CA 93943-5138

4. Phillip E. Pace, Professor
   pepace@nps.edu
   Department of Electrical and Computer Engineering
   Naval Postgraduate School
   Monterey, CA 93943

5. James Bret Michael
   Bmichael@nps.edu
   Department of Computer Science
   Naval Postgraduate School
   Monterey, CA 93943-5118

6. Dr. Carlo Kopp
   Carlo.kopp@iinet.net.au
   Air Power Australia